Results 1 - 16 of 16
1.
Neuropsychologia ; 194: 108778, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38147907

ABSTRACT

Principal themes, particularly choruses in pop songs, hold a central place in human music. Singing along with a familiar chorus tends to elicit pleasure and a sense of belonging, especially in group settings. These principal themes, which frequently serve as musical rewards, are commonly preceded by distinctive musical cues. Such cues guide listeners' attention and amplify their motivation to receive the impending themes. Despite the significance of cue-theme sequences in music, the neural mechanisms underlying the processing of these sequences in unfamiliar songs remain underexplored. To fill this research gap, we employed fMRI to examine neural activity during the cued anticipation of unfamiliar musical themes and the subsequent receipt of their opening phrase. Twenty-three Taiwanese participants underwent fMRI scans while listening to excerpts of Korean slow pop songs unfamiliar to them, with lyrics they could not understand. Our findings revealed distinct temporal dynamics in lateral frontal activity, with posterior regions being more active during theme anticipation and anterior regions during theme receipt. During anticipation, participants reported substantial increases in arousal levels, aligning with the observed enhanced activity in the midbrain, ventral striatum, inferior frontal junction, and premotor regions. We posit that when motivational musical cues are detected, the ventral striatum and inferior frontal junction play a role in attention allocation, while premotor regions may be engaged in monitoring the theme's entry. Notably, both the anticipation and receipt of themes were associated with pronounced activity in the frontal eye field, dorsolateral prefrontal cortex, posterior parietal cortex, dorsal caudate, and salience network.
Overall, our results highlight that within a naturalistic music-listening context, the dynamic interplay between the frontoparietal, dopaminergic midbrain-striatal, and salience networks could allow for precise adjustments of control demands based on the cue-theme structure in unfamiliar songs.


Subjects
Cues (Psychology), Music, Humans, Music/psychology, Motivation, Magnetic Resonance Imaging, Cognition, Brain Mapping
2.
Brain Cogn ; 169: 105987, 2023 07.
Article in English | MEDLINE | ID: mdl-37126951

ABSTRACT

The major and minor modes in Western music have positive and negative connotations, respectively. The present fMRI study examined listeners' neural responses to switches between major and minor modes. We manipulated the final chords of J. S. Bach's keyboard pieces so that each major-mode passage ended with either the major (Major-Major) or minor (Major-Minor) tonic chord, and each minor-mode passage ended with either the minor (Minor-Minor) or major (Minor-Major) tonic chord. If the final major and minor chords have positive and negative reward values respectively, the Major-Minor and Minor-Major stimuli would cause negative and positive reward prediction errors (RPEs) respectively in a listener's brain. We found that activity in a frontoparietal network was significantly higher for Major-Minor than for Major-Major. Based on previous research, these results support the idea that a major-to-minor switch causes negative RPE. The contrast of Minor-Major minus Minor-Minor yielded activation in the ventral insula and visual cortex, speaking against the idea that a minor-to-major switch causes positive RPE. We discuss our results in relation to executive functions and the emotional connotations of major versus minor modes.


Subjects
Magnetic Resonance Imaging, Music, Humans, Magnetic Resonance Imaging/methods, Music/psychology, Brain/diagnostic imaging, Brain/physiology, Emotions, Mental Processes, Auditory Perception/physiology
3.
Brain Sci ; 12(2)2022 Feb 13.
Article in English | MEDLINE | ID: mdl-35204024

ABSTRACT

When listening to music, people are excited by the musical cues immediately before rewarding passages. More generally, listeners attend to the antecedent cues of a salient musical event irrespective of its emotional valence. The present study used functional magnetic resonance imaging to investigate the behavioral and cognitive mechanisms underlying the cued anticipation of the main theme's recurrence in sonata form. Half of the main themes in the musical stimuli were of a joyful character, and half of a tragic character. Activity in the premotor cortex suggests that around the main theme's recurrence, the participants tended to covertly hum along with the music. The anterior thalamus, pre-supplementary motor area (preSMA), posterior cerebellum, inferior frontal junction (IFJ), and auditory cortex showed increased activity for the antecedent cues of the themes, relative to the middle-last part of the themes. Increased activity in the anterior thalamus may reflect its role in guiding attention towards stimuli that reliably predict important outcomes. The preSMA and posterior cerebellum may support sequence processing, fine-grained auditory imagery, and fine adjustments to humming according to auditory inputs. The IFJ might orchestrate the attention allocation to motor simulation and goal-driven attention. These findings highlight the attention control and audiomotor components of musical anticipation.

4.
Brain Cogn ; 151: 105751, 2021 07.
Article in English | MEDLINE | ID: mdl-33991840

ABSTRACT

The present study aimed at identifying the brain regions which preferentially responded to music with medium degrees of key stability. There were three types of auditory stimuli. Diatonic music based strictly on major and minor scales has the highest key stability, whereas atonal music has the lowest key stability. Between these two extremes, chromatic music is characterized by sophisticated uses of out-of-key notes, which challenge the internal model of musical pitch and lead to higher precision-weighted prediction error compared to diatonic and atonal music. The brain activity of 29 adults with excellent relative pitch was measured with functional magnetic resonance imaging while they listened to diatonic music, chromatic music, and atonal random note sequences. Several frontoparietal regions showed significantly greater response to chromatic music than to diatonic music and atonal sequences, including the pre-supplementary motor area (extending into the dorsal anterior cingulate cortex), dorsolateral prefrontal cortex, rostrolateral prefrontal cortex, intraparietal sulcus, and precuneus. We suggest that these frontoparietal regions may support working memory processes, hierarchical sequencing, and conflict resolution of remotely related harmonic elements during the predictive processing of chromatic music. This finding suggests a possible correlation between precision-weighted prediction error and the frontoparietal regions implicated in cognitive control.


Subjects
Music, Adult, Auditory Perception, Brain Mapping, Cognition, Humans, Magnetic Resonance Imaging
5.
Brain Sci ; 9(10)2019 Oct 22.
Article in English | MEDLINE | ID: mdl-31652522

ABSTRACT

Tonal languages make use of pitch variation for distinguishing lexical semantics, and their melodic richness seems comparable to that of music. The present study investigated a novel priming effect of melody on the pitch processing of Mandarin speech. When a spoken Mandarin utterance is preceded by a musical melody, which mimics the melody of the utterance, the listener is likely to perceive this utterance as song. We used functional magnetic resonance imaging to examine the neural substrates of this speech-to-song transformation. Pitch contours of spoken utterances were modified so that these utterances could be perceived as either speech or song. When modified speech (target) was preceded by a musical melody (prime) that mimicked the speech melody, a task of judging the melodic similarity between the target and prime was associated with increased activity in the inferior frontal gyrus (IFG) and superior/middle temporal gyrus (STG/MTG) during target perception. We suggest that the pars triangularis of the right IFG may allocate attentional resources to the multi-modal processing of speech melody, and the STG/MTG may integrate the phonological and musical (melodic) information of this stimulus. These results are discussed in relation to subvocal rehearsal, a speech-to-song illusion, and song perception.

6.
Neuropsychologia ; 133: 107073, 2019 10.
Article in English | MEDLINE | ID: mdl-31026474

ABSTRACT

Music is frequently used to establish atmosphere and to enhance/alter emotion in dramas and films. During music listening, visual imagery is a common mechanism underlying emotion induction. The present functional magnetic resonance imaging (fMRI) study examined the neural substrates of the emotional processing of music and imagined scene. A factorial design was used with factors emotion valence (positive; negative) and music (withoutMUSIC: script-driven imagery of emotional scenes; withMUSIC: script-driven imagery of emotional scenes and simultaneously listening to affectively congruent music). The baseline condition was imagery of neutral scenes in the absence of music. Eleven females and five males participated in this fMRI study. Behavioural data revealed that during scene imagery, participants' subjective emotions were significantly intensified by music. The contrasts of positive and negative withoutMUSIC conditions minus the baseline (imagery of neutral scenes) showed no significant activation. When comparing the withMUSIC to withoutMUSIC conditions, activity in a number of emotion-related regions was observed, including the temporal pole (TP), amygdala, hippocampus, hypothalamus, anterior ventral tegmental area (VTA), locus coeruleus, and anterior cerebellum. We hypothesized that the TP may integrate music and the imagined scene to extract socioemotional significance, initiating the subcortical structures to generate subjective feelings and bodily responses. For the withMUSIC conditions, negative emotions were associated with enhanced activation in the posterior VTA compared to positive emotions. Our findings replicated and extended previous research which suggests that different subregions of the VTA are sensitive to rewarding and aversive stimuli. Taken together, this study suggests that emotional music embedded in an imagined scenario is a salient social signal that prompts preparation of approach/avoidance behaviours and emotional responses in listeners.


Subjects
Brain/diagnostic imaging, Emotions, Music, Photic Stimulation, Affect, Amygdala/diagnostic imaging, Amygdala/physiology, Brain/physiology, Brain Stem/diagnostic imaging, Brain Stem/physiology, Cerebellum/diagnostic imaging, Cerebellum/physiology, Female, Functional Neuroimaging, Hippocampus/diagnostic imaging, Hippocampus/physiology, Humans, Hypothalamus/diagnostic imaging, Hypothalamus/physiology, Locus Coeruleus/diagnostic imaging, Locus Coeruleus/physiology, Magnetic Resonance Imaging, Male, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Ventral Tegmental Area/diagnostic imaging, Ventral Tegmental Area/physiology, Young Adult
7.
Neurosci Lett ; 696: 162-167, 2019 03 23.
Article in English | MEDLINE | ID: mdl-30557595

ABSTRACT

In human music, the tonality (key) may change to punctuate sectional structures and to produce emotional effects. A tonality change would sound "smoother" when it is supported by appropriate harmony. This functional magnetic resonance imaging study examined the neural substrates of the processing of tonality change. We used a 2 × 2 factorial design with factors tonality change (tonality changed versus tonality unchanged) and harmonization (harmonized versus unharmonized). Participants were asked to covertly sing the pitch names in the movable-do system along with the heard melody. Repetitions of this melody were associated with or without a tonality change, with equal probability in a pseudo-random order. Our results demonstrated that tonality changes elicited increased activation in the left ventrolateral prefrontal cortex and left temporal pole. When a tonality change occurred, the left ventrolateral prefrontal cortex might underpin the cognitive control for retrieving the pitch-naming rule of the new tonality, whereas the left temporal pole might integrate the melodic/harmonic context and emotional meanings of music. This study provides a new insight into the cognitive and emotional processing of music.


Subjects
Auditory Perception/physiology, Music, Pitch Perception/physiology, Prefrontal Cortex/physiology, Temporal Lobe/physiology, Adult, Cerebral Cortex/physiology, Female, Hearing/physiology, Humans, Magnetic Resonance Imaging/methods, Male, Memory/physiology, Young Adult
8.
Neuropsychologia ; 119: 118-127, 2018 10.
Article in English | MEDLINE | ID: mdl-30056054

ABSTRACT

Humans use time-varying pitch patterns to convey information in music and speech. Recognition of musical melodies and lexical tones relies on relative pitch (RP), the ability to identify intervals between two pitches. RP processing in music is usually more fine-grained than that in tonal languages. In Western music, there are twelve pitch categories within an octave, whereas there are only three level (non-glide) lexical tones in Taiwanese (or Taiwanese Hokkien, a tonal language). The present study aimed at comparing the neural substrates underlying RP processing of musical melodic intervals with those of level lexical tones in Taiwanese. Functional magnetic resonance imaging data from fourteen participants with good RP were analyzed. The results showed that imagining the sounds of visually presented musical intervals was associated with enhanced activity in the central subregion of the right dorsal premotor cortex (dPMC), right posterior parietal cortex (PPC), and right dorsal precuneus compared to auditory imagery of visually presented Taiwanese bi-character words with level lexical tones. During the sound-congruence-judgement task (auditory imagery of musical intervals or bi-character words, and subsequently judging if the imagined sounds were melodically congruent with heard sounds), the contrast of the musical minus linguistic conditions yielded activity in the bilateral dPMC-PPC network and dorsal precuneus, with the dPMC activated in the rostral subregion. The central dPMC and PPC may mediate the attention-based maintenance of pitch intervals, whereas the dorsal precuneus may support attention control and the spatial/sensorimotor processing of the fine-grained pitch structures of music. When judging the congruence between the imagined and heard musical intervals, the bilateral rostral dPMC may play a role in attention control, working memory, evaluation of motor activities, and monitoring mechanisms.
Based on the findings of this study and recent studies of amusia, we suggest that higher order cognitive operations are critical to the more fine-grained pitch processing of musical melodies compared to lexical tones.


Subjects
Motor Cortex/physiology, Music, Parietal Lobe/physiology, Pitch Perception/physiology, Speech Perception/physiology, Adult, Attention/physiology, Brain Mapping, Female, Humans, Judgment/physiology, Magnetic Resonance Imaging, Male, Young Adult
9.
Front Psychol ; 7: 182, 2016.
Article in English | MEDLINE | ID: mdl-26925009

ABSTRACT

Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are permeated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners' respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners' respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listeners' heart rate in response to the following music passage. These findings have potential implications for future research on the temporal dynamics of musical emotions.

10.
Brain Res ; 1629: 160-70, 2015 Dec 10.
Article in English | MEDLINE | ID: mdl-26499261

ABSTRACT

Artificial rewards, such as visual arts and music, produce pleasurable feelings. Popular songs in the verse-chorus form provide a useful model for understanding the neural mechanisms underlying the processing of artificial rewards, because the chorus is usually the most rewarding element of a song. In this functional magnetic resonance imaging (fMRI) study, the stimuli were excerpts of 10 popular songs with a tensioned verse-to-chorus transition. We examined the neural correlates of three phases of reward processing: (1) reward-anticipation during the verse-to-chorus transition, (2) reward-gain during the first phrase of the chorus, and (3) reward-loss during the unexpected noise that followed the verse-to-chorus transition. Participants listened to these excerpts in a risk-reward context because the verse was followed by either the chorus or noise with equal probability. The results showed that reward-gain and reward-loss were associated with left- and right-biased temporoparietal junction activation, respectively. The bilateral temporoparietal junctions were active during reward-anticipation. Moreover, we observed left-biased lateral orbitofrontal activation during reward-anticipation, whereas the medial orbitofrontal cortex was activated during reward-gain. The findings are discussed in relation to the cognitive and emotional aspects of reward processing.


Subjects
Anticipation, Psychological/physiology, Cerebral Cortex/physiology, Music, Parietal Lobe/physiology, Reward, Risk-Taking, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Emotions/physiology, Female, Humans, Male, Music/psychology, Young Adult
11.
Front Hum Neurosci ; 9: 455, 2015.
Article in English | MEDLINE | ID: mdl-26347638

ABSTRACT

In human cultures, the perceptual categorization of musical pitches relies on pitch-naming systems. A sung pitch name concurrently holds the information of fundamental frequency and pitch name. These two aspects may be either congruent or incongruent with regard to pitch categorization. The present study compared the neuromagnetic responses to musical and verbal stimuli during congruency judgments; for example, a congruent musical pair has the pitch C4 sung with the pitch name do in a C-major context (the pitch-semantic task), and a congruent verbal pair has the meaning of a word matching the speaker's identity (the voice-semantic task). Both the behavioral data and neuromagnetic data showed that congruency detection of the speaker's identity and word meaning was slower than that of the pitch and pitch name. Congruency effects of musical stimuli revealed that pitch categorization and semantic processing of pitch information were associated with P2m and N400m, respectively. For verbal stimuli, P2m and N400m did not show any congruency effect. In both the pitch-semantic task and the voice-semantic task, we found that incongruent stimuli evoked stronger slow waves at a latency of 500-600 ms than congruent stimuli. These findings shed new light on the neural mechanisms underlying pitch-naming processes.

12.
Front Psychol ; 6: 1160, 2015.
Article in English | MEDLINE | ID: mdl-26300835

ABSTRACT

In real life, listening to music may be associated with an eyes-closed or eyes-open state. The effect of eye state on listeners' reaction to music has attracted some attention, but its influence on brain activity has not been fully investigated. The present study aimed to evaluate the electroencephalographic (EEG) markers for the emotional valence of music in different eye states. Thirty participants listened to musical excerpts with different emotional content in the eyes-closed and eyes-open states. The results showed that participants rated the music as more pleasant or with more positive valence in the eyes-open state. In addition, we found that the alpha asymmetry indices calculated on the parietal and temporal sites reflected emotion valence in the eyes-closed and eyes-open states, respectively. The theta power in the frontal area significantly increased while listening to emotional-positive music compared to emotional-negative music under the eyes-closed condition. These effects of eye states on EEG markers are discussed in terms of brain mechanisms underlying attention and emotion.

13.
Eur J Neurosci ; 35(4): 634-43, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22330101

ABSTRACT

Sounds of hammering or clapping can evoke simulation of the arm movements that have been previously associated with those sounds. This audio-motor transformation also occurs at the sequential level and plays a role in speech and music processing. The present study aimed to demonstrate how the activation pattern of the sensorimotor network was modulated by the sequential nature of the auditory input and effector. Fifteen skilled drum set players participated in our functional magnetic resonance imaging study. Prior to the scan, these drummers practiced six drumming grooves. During the scan, there were four rehearsal conditions: covertly playing the drum set under the guidance of its randomly-presented isolated stroke sounds, covertly playing the drum set along with the sounds of learned percussion music, covertly reciting the syllable representation along with this music, and covertly reciting along with the syllable representation of this music. We found greater activity in the bilateral posterior middle temporal gyri for active listening to isolated drum strokes than for active listening to learned drum music. These regions might mediate the one-to-one mappings from sounds to limb movements. Compared with subvocal rehearsals along with learned drum music, covert rehearsals of limb movements along with the same music additionally activated a lateral subregion of the left posterior planum temporale. Our results illustrate a functional specialization of the posterior temporal lobes for audio-motor processing.


Subjects
Auditory Perception/physiology, Brain Mapping, Music, Psychomotor Performance/physiology, Specialization, Temporal Lobe/blood supply, Acoustic Stimulation, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Temporal Lobe/physiology, Young Adult
14.
Brain Cogn ; 74(2): 123-31, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20727651

ABSTRACT

Numerous music cultures use nonsense syllables to represent percussive sounds. Covert reciting of these syllable sequences along with percussion music aids active listeners in keeping track of music. Owing to the acoustic dissimilarity between the representative syllables and the referent percussive sounds, associative learning is necessary for the oral representation of percussion music. We used functional magnetic resonance imaging (fMRI) to explore the neural processes underlying oral rehearsals of music. There were four music conditions in the experiment: (1) passive listening to unlearned percussion music, (2) active listening to learned percussion music, (3) active listening to the syllable representation of (2), and (4) active listening to learned melodic music. Our results specified two neural substrates of the association mechanisms involved in the oral representation of percussion music. First, information integration of heard sounds and the auditory consequences of subvocal rehearsals may engage the right planum temporale during active listening to percussion music. Second, mapping heard sounds to articulatory and laryngeal gestures may engage the left middle premotor cortex.


Subjects
Auditory Perception/physiology, Brain/physiology, Music, Neurons/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Brain Mapping, Female, Functional Laterality/physiology, Humans, Magnetic Resonance Imaging, Male
15.
Ultrasound Med Biol ; 35(11): 1812-8, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19716224

ABSTRACT

We used B-mode imaging to study the vibratory phenomena of the vocal folds. The presence of multilayered structures of the vocal folds in the B-mode image was verified by using freshly excised human larynges in vitro. To capture images of vocal fold vibration, a special processing technique was used to reconstruct the aliased B-mode motion pictures of vocal fold vibration. Echo-particle image velocimetry (Echo-PIV) analysis was then applied to trace the tissue particles in the motion pictures. The vibratory behavior of the body (vocal ligament and muscle) of the vocal folds was revealed. Further analysis showed a quasi-longitudinal wave along the body of the vocal folds in the coronal plane. This is, to the best of our knowledge, the first time that vocal fold vibration physiology has been studied using B-mode imaging and Echo-PIV.


Subjects
Phonation/physiology, Vibration, Vocal Cords/diagnostic imaging, Adult, Humans, Image Processing, Computer-Assisted/methods, Male, Motion Pictures, Ultrasonography, Vocal Cords/physiology, Young Adult
16.
J Voice ; 22(3): 275-82, 2008 May.
Article in English | MEDLINE | ID: mdl-17509826

ABSTRACT

The objective of this study was to investigate the underlying laryngeal mechanisms during the specific human 4-kHz vocalization. The laryngeal configuration during this vocalization was measured using high-resolution computerized tomographic scan and videostrobolaryngoscopy. The color Doppler imaging (CDI) of medical ultrasound was used to detect the vibrations of glottal and supraglottal mucosa. During the 4-kHz vocalization, the ventricular folds were adducted in the shape of a bimodal chink and the vocal folds were shaped as a "V" with an opening at the posterior glottis. In the coronal view, the laryngeal ventricles had collapsed and a divergent-shaped conduit was observed at the posterior portion of the larynx. The surface mucosa vibration detected by CDI was noted over the bilateral ventricular folds and aryepiglottic folds. The vibration displacement was estimated to be on the order of 0.1 mm. This vibration amplitude was too small to be detected in videostrobolaryngoscopy. The laryngeal configuration and CDI data suggested a diffuser jet with periodic vorticity bursts in the larynx producing the 4-kHz voice.


Subjects
Image Processing, Computer-Assisted, Imaging, Three-Dimensional, Laryngoscopy, Larynx/physiology, Sound Spectrography, Stroboscopy, Tomography, Spiral Computed, Ultrasonography, Doppler, Color, Video Recording, Voice Quality/physiology, Adult, Humans, Laryngeal Mucosa/physiology, Male, Phonation/physiology, Pulmonary Ventilation/physiology, Vibration, Vocal Cords/physiology